We can view searching as an agent-based task in which the agent finds a sequence of actions that transforms the environment into the goal state.
While the agent executes the solution, it can ignore the perception stage.
This is possible only if the environment is fully observable, deterministic, and static (nothing else changes it while the agent acts).
This is not possible in chess, because the other player can change the environment, so we must perceive it again after every move.
We have a state space (a set) that contains all the possible states of the environment.
We have a successor function that, given a state and an action, returns the resulting state together with that action's cost.
We also have a start state and a goal state (or, more generally, a goal test).
A state-space graph is a mathematical representation of a search problem.
In this graph, each node is a state of the environment (each state appears exactly once), each directed edge represents a successor produced by an action, and the goal states are marked.
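As a minimal sketch of such a formulation, a search problem can be written directly in Python as an adjacency structure plus a start state and a goal test. The states, costs, and names below are a toy example, not from the notes:

```python
# Hypothetical search problem: states are letters, the successor function
# maps each state to (next state, action cost) pairs, 'S' is the start
# state and 'G' the goal state.
problem = {
    'states': {'S', 'A', 'B', 'C', 'G'},
    'successors': {
        'S': [('A', 1), ('B', 4)],
        'A': [('C', 2), ('G', 12)],
        'B': [('G', 5)],
        'C': [('G', 3)],
        'G': [],
    },
    'start': 'S',
    'is_goal': lambda state: state == 'G',
}

# The goal test recognises exactly the goal state.
assert problem['is_goal']('G')
assert not problem['is_goal'](problem['start'])
```

Every state appears once as a key, mirroring the state-space graph: nodes are the keys, edges are the successor pairs.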
For tree search problems we have a search tree: the root is the start state, children are successors, and each node encodes a full path (a partial plan) from the start state. We maintain a frontier of such partial plans and repeatedly expand one of them according to a strategy.
The algorithm is complete if it is guaranteed to find a solution whenever one exists, and optimal if it is guaranteed to find a least-cost solution.
Then we also have the algorithm's time and space complexity.
In order to continue, we need to define some variables: b, the branching factor (maximum number of successors of any node); m, the maximum depth of the search tree; and d, the depth of the shallowest solution.
The strategy for this tree search is: always expand the shallowest node first (breadth-first search, BFS).
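This strategy can be sketched with a FIFO frontier of partial plans. The toy graph and names below are illustrative, not from the notes:

```python
from collections import deque

# Toy state-space graph: each state maps to (successor, step cost) pairs.
GRAPH = {
    'S': [('A', 1), ('B', 4)],
    'A': [('C', 2), ('G', 12)],
    'B': [('G', 5)],
    'C': [('G', 3)],
    'G': [],
}

def bfs(graph, start, goal):
    """Tree-search BFS: always expand the shallowest node (FIFO frontier)."""
    frontier = deque([[start]])          # frontier of partial plans
    while frontier:
        path = frontier.popleft()        # shallowest plan first
        node = path[-1]
        if node == goal:
            return path
        for succ, _cost in graph[node]:  # step costs are ignored by BFS
            frontier.append(path + [succ])
    return None

print(bfs(GRAPH, 'S', 'G'))  # a goal path with the fewest actions
```

BFS finds a path with the fewest actions, which is not necessarily the cheapest one when step costs differ.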
The strategy for this tree search is: always expand the deepest node first (depth-first search, DFS).
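The only change from the breadth-first sketch is the frontier discipline: a LIFO stack instead of a FIFO queue. Again, the graph and names are a toy illustration:

```python
# Toy state-space graph: each state maps to (successor, step cost) pairs.
GRAPH = {
    'S': [('A', 1), ('B', 4)],
    'A': [('C', 2), ('G', 12)],
    'B': [('G', 5)],
    'C': [('G', 3)],
    'G': [],
}

def dfs(graph, start, goal):
    """Tree-search DFS: always expand the deepest node (LIFO frontier).
    Note: plain tree-search DFS can loop forever if the graph has cycles."""
    frontier = [[start]]                 # a Python list used as a stack
    while frontier:
        path = frontier.pop()            # deepest plan first
        node = path[-1]
        if node == goal:
            return path
        for succ, _cost in graph[node]:
            frontier.append(path + [succ])
    return None

print(dfs(GRAPH, 'S', 'G'))  # some goal path, not necessarily a cheap one
```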
From this point on, actions (edges) can have different costs.
The strategy for this tree search is: always expand the cheapest node first, i.e. the node whose path from the start has the lowest cumulative cost (uniform-cost search, UCS).
We order the frontier by each node's cumulative path cost g(n), typically with a priority queue, and always expand the first (cheapest) one.
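Ordering the frontier by cumulative path cost can be sketched with a binary heap; `heapq` keeps the cheapest partial plan at the front. The graph is the same illustrative toy example:

```python
import heapq

# Toy state-space graph: each state maps to (successor, step cost) pairs.
GRAPH = {
    'S': [('A', 1), ('B', 4)],
    'A': [('C', 2), ('G', 12)],
    'B': [('G', 5)],
    'C': [('G', 3)],
    'G': [],
}

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the node with the lowest
    cumulative path cost g(n); the frontier is a priority queue."""
    frontier = [(0, [start])]            # (g(n), partial plan)
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:                 # goal test on expansion => optimal
            return cost, path
        for succ, step in graph[node]:
            heapq.heappush(frontier, (cost + step, path + [succ]))
    return None

print(ucs(GRAPH, 'S', 'G'))  # cheapest path and its total cost
```

Testing the goal when a node is expanded (not when it is generated) is what guarantees the returned path is the cheapest one.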
Expand the node that appears closest to the goal according to a heuristic function h(n), which estimates the cost from a state to the goal (greedy best-first search).
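A sketch of this strategy: the frontier is ordered by h alone, so the search can be led down an expensive path. The graph and the heuristic values below are an illustrative assumption, not from the notes:

```python
import heapq

# Toy state-space graph: each state maps to (successor, step cost) pairs.
GRAPH = {
    'S': [('A', 1), ('B', 4)],
    'A': [('C', 2), ('G', 12)],
    'B': [('G', 5)],
    'C': [('G', 3)],
    'G': [],
}
# Hypothetical heuristic: estimated cost from each state to the goal 'G'.
H = {'S': 6, 'A': 5, 'B': 5, 'C': 3, 'G': 0}

def greedy(graph, h, start, goal):
    """Greedy best-first search: expand the node that looks closest to
    the goal according to h(n) alone; fast, but not optimal."""
    frontier = [(h[start], [start])]     # (h(n), partial plan)
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        for succ, _step in graph[node]:
            heapq.heappush(frontier, (h[succ], path + [succ]))
    return None

print(greedy(GRAPH, H, 'S', 'G'))
```

On this graph, greedy search is lured by the direct edge A→G (cost 12) and returns a path of total cost 13, while the cheapest path costs 6.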
Expand the node that has the lowest combined cost f(n) = g(n) + h(n), where g(n) is the cost of the path so far and h(n) is the heuristic estimate of the remaining cost (A* search).
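A sketch of this combined-cost strategy, reusing the same illustrative toy graph and heuristic (both are assumptions, not from the notes):

```python
import heapq

# Toy state-space graph: each state maps to (successor, step cost) pairs.
GRAPH = {
    'S': [('A', 1), ('B', 4)],
    'A': [('C', 2), ('G', 12)],
    'B': [('G', 5)],
    'C': [('G', 3)],
    'G': [],
}
# Hypothetical heuristic: estimated cost from each state to the goal 'G'.
H = {'S': 6, 'A': 5, 'B': 5, 'C': 3, 'G': 0}

def astar(graph, h, start, goal):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, [start])]  # (f(n), g(n), partial plan)
    while frontier:
        _, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return g, path
        for succ, step in graph[node]:
            g2 = g + step
            heapq.heappush(frontier, (g2 + h[succ], g2, path + [succ]))
    return None

print(astar(GRAPH, H, 'S', 'G'))  # cheapest path and its total cost
```

With a heuristic that never overestimates the true remaining cost, A* recovers the same optimal path as uniform-cost search while typically expanding fewer nodes.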